Psychology Dictionary of Arguments


 
Strong Artificial Intelligence: Artificial Intelligence (AI) refers to computer systems emulating human-like cognitive functions. Strong AI, or Artificial General Intelligence (AGI), implies a system with human-level intelligence, capable of learning and performing any intellectual task, akin to human capabilities across diverse domains. See also Artificial Intelligence, Artificial General Intelligence, Human-level AI, Artificial consciousness.
_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

Daniel Dennett on Strong Artificial Intelligence - Dictionary of Arguments

Brockman I 48
Strong Artificial Intelligence/Dennett: [Weizenbaum](1) could never decide which of two theses he wanted to defend: AI is impossible! or AI is possible but evil! He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion.
Dennett: As one might expect, the defensible thesis is a hybrid: AI (Strong AI) is possible in principle but not desirable. The AI that’s practically possible is not necessarily evil - unless it is mistaken for Strong AI!
E.g. IBM’s Watson: Its victory in Jeopardy! was a genuine triumph, made possible by the formulaic restrictions of the Jeopardy! rules, but in order for it to compete, even these rules had to be revised (…). Watson is not good company, in spite of misleading ads from IBM that suggest a general conversational ability, and turning Watson into a plausibly multidimensional agent would be like turning a hand calculator into Watson. Watson could be a useful core faculty for such an agent, but more like a cerebellum or an amygdala than a mind—at best, a special-purpose subsystem that could play a big supporting role (…).
Brockman I 50
One can imagine a sort of inverted Turing Test in which the judge is on trial; until he or she can spot the weaknesses, the overstepped boundaries, the gaps in a system, no license to operate will be issued. The mental training required to achieve certification as a judge will be demanding.
Brockman I 51
We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to “abuses” rained on them by inept users.(2)
Rationale/Dennett: [these agents] would not (…) share with us (…) our vulnerability or our mortality. >Robots/Dennett.


1. Weizenbaum, J. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman, 1976.
2. Joanna J. Bryson, “Robots Should Be Slaves,” in Close Engagements with Artificial Companions, Yorick Wilks, ed. (Amsterdam, The Netherlands: John Benjamins, 2010), 63-74; http://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html; Joanna J. Bryson, “Patiency Is Not a Virtue: AI and the Design of Ethical Systems,” https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Patiency-AAAISS16.pdf [inactive].


Dennett, D. “What can we do?”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press.

_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are indicated on the right-hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.

Dennett I
D. Dennett
Darwin’s Dangerous Idea, New York 1995
German Edition:
Darwins gefährliches Erbe Hamburg 1997

Dennett II
D. Dennett
Kinds of Minds, New York 1996
German Edition:
Spielarten des Geistes Gütersloh 1999

Dennett III
Daniel Dennett
"COG: Steps towards consciousness in robots"
In
Bewusstsein, Thomas Metzinger, Paderborn/München/Wien/Zürich 1996

Dennett IV
Daniel Dennett
"Animal Consciousness. What Matters and Why?", in: D. C. Dennett, Brainchildren. Essays on Designing Minds, Cambridge/MA 1998, pp. 337-350
In
Der Geist der Tiere, D. Perler/M. Wild, Frankfurt/M. 2005

Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI, New York 2019

